
    A Conversation with Martin Bradbury Wilk

    Martin Bradbury Wilk was born on December 18, 1922, in Montréal, Québec, Canada. He completed a B.Eng. degree in Chemical Engineering in 1945 at McGill University and worked as a Research Engineer on the Atomic Energy Project for the National Research Council of Canada from 1945 to 1950. He then went to Iowa State College, where he completed an M.Sc. and a Ph.D. degree in Statistics in 1953 and 1955, respectively. After a one-year post-doc with John Tukey, he became Assistant Director of the Statistical Techniques Research Group at Princeton University in 1956--1957, and then served as Professor and Director of Research in Statistics at Rutgers University from 1959 to 1963. In parallel, he also had a 14-year career at Bell Laboratories, Murray Hill, New Jersey. From 1956 to 1969, he was in turn Member of Technical Staff, Head of the Statistical Models and Methods Research Department, and Statistical Director in Management Sciences Research. He wrote a number of influential papers in statistical methodology during that period, notably testing procedures for normality (the Shapiro--Wilk statistic) and probability plotting techniques for multivariate data. In 1970, Martin moved into higher management levels of the American Telephone and Telegraph (AT&T) Company. He occupied various positions, culminating as Assistant Vice-President and Director of Corporate Planning. In 1980, he returned to Canada and became the first professional statistician to serve as Chief Statistician. His accomplishments at Statistics Canada were numerous and contributed to a resurgence of the institution's international standing. He played a crucial role in the reinstatement of the Cabinet-cancelled 1986 Census. Comment: Published at http://dx.doi.org/10.1214/08-STS272 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Comprehensive Monitor-Oriented Compensation Programming

    Compensation programming is typically used in programming web service compositions, whose correct implementation is crucial due to their handling of security-critical activities such as financial transactions. While traditional exception handling depends on the state of the system at the moment of failure, compensation programming is significantly more challenging and dynamic because it depends on the runtime execution flow, with the history of the system's behaviour at the moment of failure affecting how compensation is applied. To address this dynamic element, we propose the use of runtime monitors to facilitate compensation programming, with monitors enabling the modeller to reason implicitly in terms of the runtime control flow, thus separating the concerns of system building and compensation modelling. Our approach is instantiated into an architecture and shown to be applicable to a case study. Comment: In Proceedings FESCA 2014, arXiv:1404.043
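    The core idea of monitor-oriented compensation can be sketched as follows: a monitor records each completed action together with its compensation, so the compensation applied on failure reflects the actual execution history. This is a minimal illustration, not the paper's architecture; all names are invented.

```python
# Illustrative sketch of a compensation monitor: it logs the history of
# completed actions with their compensations, and on failure replays the
# compensations in reverse (LIFO) order.

class CompensationMonitor:
    def __init__(self):
        self._log = []  # history of (action_name, compensation) pairs

    def completed(self, name, compensation):
        """Record a successfully completed action and how to undo it."""
        self._log.append((name, compensation))

    def fail(self):
        """On failure, undo the recorded history in reverse order."""
        undone = []
        while self._log:
            name, compensation = self._log.pop()
            compensation()          # run the compensating action
            undone.append(name)
        return undone

monitor = CompensationMonitor()
monitor.completed("reserve_seat", lambda: print("cancel seat"))
monitor.completed("charge_card", lambda: print("refund card"))
# On failure, the card is refunded before the seat is cancelled:
assert monitor.fail() == ["charge_card", "reserve_seat"]
```

    Because the monitor, not the business logic, tracks what has run, the compensation order automatically follows whatever control-flow path was actually taken.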

    Device-Centric Monitoring for Mobile Device Management

    The ubiquity of computing devices has led to an increased need to ensure not only that the applications deployed on them are correct with respect to their specifications, but also that the devices are used in an appropriate manner, especially when the device is provided by a party other than the actual user. Much of the work done on runtime verification for mobile devices and operating systems is application-centric, making global, device-centric properties (e.g. that the user may not send more than 100 messages per day across all applications) difficult or impossible to verify. In this paper we present a device-centric approach that verifies device behaviour at runtime against a device policy, with the different applications acting as independent components contributing to the overall behaviour of the device. We also present an implementation for Android devices, and evaluate it on a number of device-centric policies, reporting the empirical results obtained. Comment: In Proceedings FESCA 2016, arXiv:1603.0837
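    The distinction between application-centric and device-centric monitoring can be made concrete with the abstract's own example policy. The sketch below is illustrative only (the paper's Android implementation is not reproduced here; class and method names are invented): every application reports to one shared monitor, which maintains a single counter across all of them.

```python
# Hypothetical device-centric monitor: one global counter aggregated over
# every application, enforcing "no more than N messages sent per day
# across all applications".

class DevicePolicyMonitor:
    def __init__(self, daily_limit=100):
        self.daily_limit = daily_limit
        self.sent_today = 0  # aggregated over every application

    def on_message_sent(self, app):
        self.sent_today += 1
        if self.sent_today > self.daily_limit:
            raise PermissionError(
                f"device policy violated by {app}: "
                f"{self.sent_today} messages exceed the daily limit")

monitor = DevicePolicyMonitor(daily_limit=2)
monitor.on_message_sent("mail")
monitor.on_message_sent("chat")   # different app, same global counter
try:
    monitor.on_message_sent("sms")
except PermissionError:
    print("third message blocked by device-wide policy")
```

    A purely application-centric monitor would keep one counter per app, so three apps sending 99 messages each would never trip the 100-message device policy; the shared counter is what makes the property device-centric.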

    Extensible Technology-Agnostic Runtime Verification

    With numerous specialised technologies available to industry, it has become increasingly common for computer systems to be composed of heterogeneous components built over, and using, different technologies and languages. While this enables developers to use the appropriate technology for each context, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework enabling extensible, technology-agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool able to monitor heterogeneous-component systems. The approach is then applied to a case study of a component-based artefact using different technologies, namely C and Java. Comment: In Proceedings FESCA 2013, arXiv:1302.478
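    One common way to achieve technology agnosticism, sketched below, is to wrap each component in an adapter that translates its native notifications into a shared event format, so a single monitor never sees technology-specific detail. This is a generic illustration of the idea, not polyLarva's actual architecture; every name here is invented.

```python
# Hedged sketch: adapters normalise events from a C component and a Java
# component into one Event type, which a single monitor consumes.

class Event:
    def __init__(self, component, name, payload=None):
        self.component, self.name, self.payload = component, name, payload

class CAdapter:
    """Pretend adapter for a C component, parsing its textual trace."""
    def translate(self, raw_line):
        name, _, payload = raw_line.partition(":")
        return Event("c-module", name, payload or None)

class JavaAdapter:
    """Pretend adapter for a Java component, fed by instrumentation."""
    def translate(self, call):
        return Event("java-service", call["method"], call.get("args"))

def monitor(events):
    # A deliberately simple property: 'close' must never precede 'open'.
    opened = False
    for e in events:
        if e.name == "open":
            opened = True
        elif e.name == "close" and not opened:
            return f"violation in {e.component}"
    return "ok"

events = [CAdapter().translate("open:file.txt"),
          JavaAdapter().translate({"method": "close"})]
print(monitor(events))  # prints "ok"
```

    Extensibility then amounts to writing a new adapter per technology, leaving the monitor and the property language untouched.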

    Linking gender differences in parenting to a typology of family parenting styles and adolescent developmental outcomes

    This dissertation used data from a sample of 451 families living in Central Iowa to address four research questions. The first research question addressed gender differences in insight regarding one's own parenting practices. Results provided evidence that neither mothers nor fathers have a great deal of insight in this area. The second research question focused on how mothers and fathers differ in their levels of various parenting behaviors. Results showed that mothers engage in more child monitoring. Mixed results for warmth and consistent discipline precluded drawing any conclusions regarding mother-father differences. Next, I focused on the ways in which mothers and fathers differ with regard to parenting style. Mothers were more likely than fathers to exhibit authoritative parenting. Both parents and children agreed that authoritarian parenting is rare, but that an indulgent style is very common. The third research question focused on ways in which mothers' and fathers' parenting styles combine to form family parenting styles. Observers were more likely to categorize both parents as authoritative than was the case for parents' self-reports or child reports. Over one-third of children reported that they had two indulgent parents, and none reported that they had two authoritarian parents. Only 1% of parents' self-reports indicated that there were two authoritarian parents in the household. While the majority of families could be classified using only the four Maccoby and Martin (1983) types, there was a significant minority that could not be classified, regardless of reporter. By including all of the most commonly occurring family parenting styles, the proportion of families included increased dramatically for each reporter. Finally, family parenting styles were used to predict child adjustment (e.g., delinquency, depression, school commitment). Consistent with previous research, authoritative parenting was associated with the most positive child outcomes. In the absence of two authoritative parents, having one authoritative parent paired with a non-authoritative parent produced better outcomes than combinations without an authoritative parent. Uninvolved parenting, especially on the part of mothers, was associated with the most negative child outcomes.

    Random subgraphs of finite graphs: I. The scaling window under the triangle condition

    We study random subgraphs of an arbitrary finite connected transitive graph $\mathbb{G}$ obtained by independently deleting edges with probability $1-p$. Let $V$ be the number of vertices in $\mathbb{G}$, and let $\Omega$ be their degree. We define the critical threshold $p_c=p_c(\mathbb{G},\lambda)$ to be the value of $p$ for which the expected cluster size of a fixed vertex attains the value $\lambda V^{1/3}$, where $\lambda$ is fixed and positive. We show that for any such model, there is a phase transition at $p_c$ analogous to the phase transition for the random graph, provided that a quantity called the triangle diagram is sufficiently small at the threshold $p_c$. In particular, we show that the largest cluster inside a scaling window of size $|p-p_c|=\Theta(\Omega^{-1}V^{-1/3})$ is of size $\Theta(V^{2/3})$, while below this scaling window, it is much smaller, of order $O(\epsilon^{-2}\log(V\epsilon^3))$, with $\epsilon=\Omega(p_c-p)$. We also obtain an upper bound $O(\Omega(p-p_c)V)$ for the expected size of the largest cluster above the window. In addition, we define and analyze the percolation probability above the window and show that it is of order $\Theta(\Omega(p-p_c))$. Among the models for which the triangle diagram is small enough to allow us to draw these conclusions are the random graph, the $n$-cube and certain Hamming cubes, as well as the spread-out $n$-dimensional torus for $n>6$.
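    The model itself is easy to simulate, which gives some intuition for the phase transition the abstract describes. The sketch below (a Monte Carlo illustration not taken from the paper) retains each edge of the $n$-cube independently with probability $p$ and measures the largest cluster; for the $n$-cube, $V = 2^n$ and the degree is $\Omega = n$, so the critical window sits near $p = 1/n$ for large $n$.

```python
# Bond percolation on the n-dimensional hypercube: keep each edge with
# probability p, then find the largest connected cluster via union-find.
import random

def largest_cluster(n, p, seed=0):
    rng = random.Random(seed)
    V = 1 << n                  # 2^n vertices, labelled by bit strings
    parent = list(range(V))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for v in range(V):
        for i in range(n):      # each edge flips one coordinate bit
            w = v ^ (1 << i)
            if v < w and rng.random() < p:   # retain edge with prob. p
                parent[find(v)] = find(w)

    sizes = {}
    for v in range(V):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

# Sweeping p below, near, and above 1/n shows the largest cluster
# jumping from logarithmic size towards a giant component:
for p in (0.05, 0.10, 0.20):
    print(p, largest_cluster(10, p))
```

    A single run is noisy; averaging over many seeds at each $p$ traces out the sharp growth of the largest cluster through the scaling window.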

    Recovery within long running transactions

    As computer systems continue to grow in complexity, the possibilities of failure increase. At the same time, the increasing pervasiveness of computer systems in day-to-day activities has brought increased expectations of their reliability. This has led to the need for effective and automatic error-recovery techniques to resolve failures. Transactions enable the handling of failure propagation over concurrent systems due to dependencies, restoring the system to the point before the failure occurred. However, in various settings, especially when interacting with the real world, reversal is not possible. The notion of compensations has long been advocated as a way of addressing this issue, through the specification of activities which can be executed to undo partial transactions. Still, there is no accepted standard theory; the literature offers a plethora of distinct formalisms and approaches. In this survey, we review compensations from a theoretical point of view by: (i) giving a historic account of the evolution of compensating transactions; (ii) delineating and describing a number of design options involved; (iii) presenting a number of formalisms found in the literature, exposing similarities and differences; (iv) comparing formal notions of compensation correctness; (v) giving insights regarding the application of compensations in practice; and (vi) discussing current and future research trends in the area.

    A compensating transaction example in twelve notations

    The landscape of business computer systems changed with the advent of cross-entity computer interactions: computer systems no longer had the limited role of storing and processing data, but became themselves the players which actuate real-life actions. These advancements rendered the traditional transaction mechanism insufficient for the new complexities of longer, multi-party transactions. The concept of compensations has long been suggested as a solution, providing the possibility of executing "counter"-actions which semantically undo previously completed actions in case a transaction fails. There are numerous design options related to compensations, particularly when deciding the strategy for ordering compensating actions. Over the years, various models which include compensations have emerged, each tackling these options in its own way. In this work, we review a number of notations which handle compensations by going through their syntax and semantics, highlighting the distinguishing features, and encoding a typical compensating transaction example in each of these notations.
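    The kind of example such notations encode can be sketched structurally: each step pairs a forward action with its compensation, and sequential composition compensates completed steps in reverse order, as in sagas. This rendering is illustrative only; it reproduces none of the twelve notations, and the booking scenario and helper names are invented.

```python
# A compensable transaction as a list of (forward, compensation) pairs.
# If any forward action fails, the compensations of the steps that had
# already completed run in reverse order.

def run_transaction(steps):
    """steps: list of (forward, compensation) callables.
    Returns the trace of executed actions, including any compensations."""
    trace, done = [], []
    try:
        for forward, compensate in steps:
            trace.append(forward())
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):  # undo completed steps only
            trace.append(compensate())
    return trace

def ok(name):   return lambda: f"do:{name}"
def undo(name): return lambda: f"undo:{name}"
def boom():
    raise RuntimeError("step failed")

steps = [(ok("book_flight"), undo("book_flight")),
         (ok("book_hotel"),  undo("book_hotel")),
         (boom,              undo("rent_car"))]
print(run_transaction(steps))
# → ['do:book_flight', 'do:book_hotel', 'undo:book_hotel', 'undo:book_flight']
```

    The design options the abstract mentions show up precisely here: whether compensations run strictly in reverse, in parallel, or in some notation-specific order is what distinguishes the surveyed formalisms.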